
    Elementary landscape decomposition of the 0-1 unconstrained quadratic optimization

    Journal of Heuristics, 19(4), pp. 711-728

    Landscape theory provides a formal framework in which combinatorial optimization problems can be characterized as a sum of a special kind of landscape called an elementary landscape. The elementary landscape decomposition of a combinatorial optimization problem is a useful tool for understanding the problem: it provides additional knowledge that can be exploited to explain the behavior of existing algorithms when they are applied to the problem, or to create new search methods for the problem. In this paper we analyze the 0-1 Unconstrained Quadratic Optimization problem from the point of view of landscape theory. We prove that the problem can be written as the sum of two elementary components, and we give the exact expressions for these components. We use the landscape decomposition to compute autocorrelation measures of the problem, and we show some practical applications of the decomposition.

    Funding: Spanish Ministry of Science and Innovation and FEDER under contract TIN2008-06491-C04-01 (the M∗ project); Andalusian Government under contract P07-TIC-03044 (DIRICOM project).
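    The paper's closed-form expressions are not reproduced here, but the underlying result is easy to check numerically for small n: an objective f(x) = x^T Q x over {0,1}^n has Walsh (Fourier) components only of orders 0, 1, and 2, and the order-1 and order-2 parts are each elementary under the one-bit-flip neighborhood, i.e. each satisfies Grover's wave equation avg_{y∈N(x)} g(y) = g(x) + (k/n)(ḡ - g(x)) with k = 2·order. The following sketch (exhaustive enumeration; variable names are illustrative assumptions, not taken from the paper) verifies both facts:

    ```python
    # Minimal numerical check of the two-component decomposition for small n.
    import itertools
    import numpy as np

    n = 6
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(n, n))

    X = np.array(list(itertools.product([0, 1], repeat=n)))  # all 2^n points
    f = np.einsum('ij,jk,ik->i', X, Q, X)                    # f(x) = x^T Q x
    S = 1.0 - 2.0 * X                                        # 0/1 -> +1/-1 encoding

    def walsh_component(order):
        """Sum of all Walsh terms of the given order (mean zero for order >= 1)."""
        g = np.zeros(len(X))
        for subset in itertools.combinations(range(n), order):
            chi = S[:, list(subset)].prod(axis=1)            # Walsh basis function
            g += (f @ chi) / len(X) * chi                    # projection onto chi
        return g

    f1, f2 = walsh_component(1), walsh_component(2)
    assert np.allclose(f, f.mean() + f1 + f2)                # exactly two components

    # Grover's wave equation for each component (k = 2 for order 1, k = 4 for order 2).
    neighbors = [np.where((X ^ X[i]).sum(axis=1) == 1)[0] for i in range(len(X))]
    for g, k in ((f1, 2), (f2, 4)):
        avg = np.array([g[nb].mean() for nb in neighbors])
        assert np.allclose(avg, g + (k / n) * (g.mean() - g))
    print("f = const + f1 + f2 with f1, f2 elementary (k = 2 and 4)")
    ```

    Because the decomposition is exact, the relative weights of the two components also determine landscape statistics such as the autocorrelation measures mentioned in the abstract.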

    Convergence Theorems of Estimation of Distribution Algorithms

    No full text
    Estimation of Distribution Algorithms (EDAs) have been proposed as an extension of genetic algorithms. We assume that the function to be optimized is an additively decomposed function (ADF). The interaction graph of the ADF is used to create exact or approximate factorizations of the Boltzmann distribution. Convergence of the algorithm MN-GIBBS is proven; MN-GIBBS uses a Markov network easily derived from the ADF together with Gibbs sampling. The Factorized Distribution Algorithm (FDA) uses a less general representation, a Bayesian network, and probabilistic logic sampling (PLS). We briefly describe the algorithm LFDA, which learns a Bayesian network from data, and investigate the relation between the network computed by LFDA and the optimal network used by FDA. Convergence of FDA to the optima is shown for finite samples if the factorization fulfills the running intersection property. The sample size is bounded by O(nm ln(nm)), where n is the size of the problem and m the number of sub-functions. The proof uses results from statistical learning theory and Probably Approximately Correct (PAC) learning. Numerical experiments show that even for difficult test functions a sample size that scales linearly with n is often sufficient. We also show that a good approximation of the true distribution is not necessary; it suffices to use a factorization in which the global optima have a large enough probability. This explains the success of EDAs in practical applications.
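    To make the MN-GIBBS idea concrete: when Gibbs sampling from the Boltzmann distribution p(x) ∝ exp(β f(x)) of an ADF, the conditional of each bit involves only the sub-functions whose scope contains that bit, and this neighborhood structure is exactly the Markov network read off the interaction graph. A minimal sketch on a toy chain-structured ADF (the problem, schedule, and names are illustrative assumptions, not the paper's algorithm or experimental setup):

    ```python
    # Gibbs sampling of the Boltzmann distribution of an additively decomposed function.
    import numpy as np

    n = 12
    # Toy ADF: overlapping 3-bit sub-functions on a chain; rewards all-ones blocks.
    scopes = [(i, i + 1, i + 2) for i in range(n - 2)]
    subfns = [lambda bits: float(sum(bits) == 3) for _ in scopes]
    touching = [[m for m, sc in enumerate(scopes) if i in sc] for i in range(n)]

    def local_energy(x, i):
        """Sum of the sub-functions whose scope contains bit i (its Markov blanket)."""
        return sum(subfns[m](tuple(x[j] for j in scopes[m])) for m in touching[i])

    def gibbs_sweep(x, beta, rng):
        for i in range(n):
            x[i] = 1
            e1 = local_energy(x, i)                          # only local terms matter:
            x[i] = 0                                         # all other sub-functions
            e0 = local_energy(x, i)                          # cancel in the ratio
            p1 = 1.0 / (1.0 + np.exp(-beta * (e1 - e0)))     # p(x_i = 1 | rest)
            x[i] = int(rng.random() < p1)
        return x

    rng = np.random.default_rng(1)
    x = rng.integers(0, 2, n)
    for beta in np.linspace(0.5, 5.0, 50):                   # anneal toward the optima
        for _ in range(10):
            x = gibbs_sweep(x, beta, rng)
    print(x)
    ```

    As β grows, the Boltzmann distribution concentrates on the global optima (here the all-ones string), which is why a factorization that merely gives the optima large enough probability can suffice for convergence.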